University of Texas at San Antonio



**Open Cloud Institute**


Machine Learning / Big Data, EE-6973-001, Fall 2016


**Paul Rad, Ph.D.**

**Ali Miraftab, Research Fellow**



**Real-Time Image Classification Using a DNN through ROS for Drones and Ground Robots**


Karthik Pai Haradi, Abhijith Ravikumar Puthussery
*Open Cloud Institute, University of Texas at San Antonio, San Antonio, Texas, USA*
{dxq821, qaw164}@my.utsa.edu



**Dataset:** The image data can be found at http://www.vision.caltech.edu/Image_Datasets/Caltech101/. The dataset contains pictures of objects belonging to 101 categories, with roughly 40 to 800 images per category.
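The Caltech101 archive unpacks into one subdirectory per category, each holding JPEG images. As a minimal sketch of how the per-category image counts could be tallied, the snippet below walks such a layout; here it builds a tiny mock directory tree (the category names and counts are illustrative, not the real dataset) so it runs anywhere:

```python
import os
import tempfile

def count_images_per_category(root):
    """Map each category directory under `root` to its image count."""
    counts = {}
    for category in sorted(os.listdir(root)):
        path = os.path.join(root, category)
        if os.path.isdir(path):
            counts[category] = sum(
                1 for f in os.listdir(path)
                if f.lower().endswith((".jpg", ".jpeg", ".png"))
            )
    return counts

# Build a tiny mock layout mirroring the archive's one-folder-per-category
# structure (illustrative names/counts, not the actual Caltech101 contents).
root = tempfile.mkdtemp()
for category, n in [("airplanes", 3), ("camera", 2)]:
    os.makedirs(os.path.join(root, category))
    for i in range(n):
        open(os.path.join(root, category, "image_%04d.jpg" % i), "w").close()

counts = count_images_per_category(root)
```

Pointing `count_images_per_category` at the real unpacked archive would report how balanced (or imbalanced) the 101 classes are, which matters when splitting training and test sets.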

**Outcome:** Add image identification and classification capability to the Kobuki TurtleBot, extending its onboard intelligence.

**Project Definition:** Using a Convolutional Neural Network (CNN), we train the network to recognize different classes of images. Training uses the provided labeled image sets, which classify objects into 101 categories; other freely available training sets may be added later. We design and configure this CNN using TensorFlow, a widely used open-source machine learning framework.
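TensorFlow handles the actual model definition and training. As a minimal illustration of the core operation a convolutional layer performs, here is a single "valid" convolution followed by a ReLU activation in plain NumPy; the edge-detecting kernel and toy image are illustrative values of our own, not part of the project's network:

```python
import numpy as np

def conv2d_valid(image, kernel):
    """'Valid' 2-D cross-correlation: slide the kernel over the image
    and take the elementwise product-sum at each position."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
    return out

def relu(x):
    """Rectified linear unit, the usual nonlinearity after a conv layer."""
    return np.maximum(x, 0.0)

# Toy 4x4 image with a bright right half, and a vertical-edge kernel.
image = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1]], dtype=float)
kernel = np.array([[-1, 1],
                   [-1, 1]], dtype=float)

feature_map = relu(conv2d_valid(image, kernel))
# The map responds only where the dark-to-bright vertical edge sits.
```

A real CNN stacks many such learned kernels with pooling and fully connected layers; TensorFlow provides this operation directly (e.g. via `tf.nn.conv2d`) and learns the kernel weights from the labeled data.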

Once this network is designed and trained, we can create a ROS package containing the algorithm. ROS (Robot Operating System) is middleware for communicating effectively with robots, and a ROS package can accept different inputs, such as test images from external sources. This package runs on the Kobuki ground robot, which feeds it test images captured by its onboard camera, and we analyze the trained network's predictions.
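The node's shape can be sketched as follows. This is a sketch under stated assumptions, not the project's implementation: the camera topic name, the `classify` stub (which stands in for the trained TensorFlow model and always returns a dummy score), and the label subset are all hypothetical, and the ROS-specific parts are import-guarded so the pure-Python pieces run outside a ROS install:

```python
import numpy as np

# Illustrative subset of the 101 class labels (hypothetical).
LABELS = ["airplanes", "camera", "chair"]

def top_label(scores, labels):
    """Return the label with the highest classifier score."""
    return labels[int(np.argmax(scores))]

def classify(image_array):
    """Placeholder for a forward pass of the trained network.
    Here it returns a fixed dummy score so the sketch is runnable."""
    scores = np.zeros(len(LABELS))
    scores[1] = 1.0  # dummy: pretend the network always sees a 'camera'
    return top_label(scores, LABELS)

try:
    # Available only inside a ROS environment (e.g. ROS Indigo/Kinetic).
    import rospy
    from sensor_msgs.msg import Image
    from cv_bridge import CvBridge

    def on_image(msg, bridge=CvBridge()):
        # Convert the ROS image message to an OpenCV/NumPy array.
        frame = bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")
        rospy.loginfo("prediction: %s", classify(frame))

    def main():
        rospy.init_node("image_classifier")
        # Assumed topic name; the robot's actual camera topic may differ.
        rospy.Subscriber("/camera/rgb/image_raw", Image, on_image)
        rospy.spin()
except ImportError:
    main = None  # not running under ROS

# Outside ROS we can still exercise the classification path directly.
prediction = classify(np.zeros((64, 64, 3)))
```

In a real package, `main()` would be invoked from the node's entry point under `if __name__ == "__main__":`, and `classify` would load and run the trained TensorFlow graph instead of the stub.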

With the project defined above, we aim to extend the visual intelligence of the ground robot. The robot currently has no built-in ability to identify or classify images; this project adds that capability.